
    A neural model for the visual tuning properties of action-selective neurons

    SUMMARY: The recognition of actions of conspecifics is crucial for survival and social interaction. Most current models of the recognition of transitive (goal-directed) actions rely on the hypothesized role of internal motor simulations for action recognition. However, these models do not specify how visual information can be processed by cortical mechanisms so that it can be compared with such motor representations. This raises the question of how such visual processing might be accomplished, and to what extent motor processing is needed to account for the visual properties of action-selective neurons.
We present a neural model for the visual processing of transitive actions that is consistent with physiological data and accomplishes recognition of grasping actions from real video stimuli. Shape recognition is accomplished by a view-dependent hierarchical neural architecture that retains coarse position information at the highest level, which can be exploited by subsequent stages. Additionally, simple recurrent neural circuits integrate effector information over time and realize selectivity for temporal sequences. A novel mechanism combines information about the shape and position of object and effector in an object-centered frame of reference. Action-selective model neurons defined in such a relative reference frame are tuned to learned associations between object and effector shapes, as well as their relative position and motion.
We demonstrate that this model reproduces a variety of electrophysiological findings on the visual properties of action-selective neurons in the superior temporal sulcus and of mirror neurons in area F5. Specifically, the model accounts for the fact that a majority of mirror neurons in area F5 show view dependence. The model predicts a number of electrophysiological results, some of which could be confirmed in recent experiments.
We conclude that the visual tuning of action-selective neurons can be accounted for by well-established, predominantly visual neural processes rather than by internal motor simulations.

METHODS: Shape recognition relies on a hierarchy of feature detectors of increasing complexity and invariance [1]. The mid-level features are learned from sequences of gray-level images depicting segmented views of hand and object shapes. The highest hierarchy level consists of detector populations for complete shapes with a coarse spatial resolution of approximately 3.7°. Additionally, effector shapes are integrated over time by asymmetric lateral connections between shape detectors, using a neural field approach [2]. These model neurons thus encode actions such as hand opening or closing for particular grip types.
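The sequence-selectivity mechanism described above (asymmetric lateral connections in a neural field) can be illustrated with a minimal sketch. The field size, kernel shape, and time constant below are illustrative assumptions, not the model's actual parameters:

```python
import numpy as np

# Minimal 1D neural field sketch (simplified in the spirit of [2]).
# Shape-detector units are ordered along the temporal sequence of an
# action (e.g. successive hand apertures during grasping). An asymmetric
# lateral kernel pre-activates the *next* unit in the sequence, so the
# field responds strongly only when frames arrive in the trained order.

n, tau, dt = 20, 5.0, 1.0
x = np.arange(n)
# asymmetric kernel: excitation biased toward the following detector,
# with a constant inhibitory background
kernel = np.exp(-((x[:, None] - x[None, :] - 1) ** 2) / 2.0) - 0.5

def run_field(frame_order):
    u = np.zeros(n)                      # membrane potentials
    total = 0.0
    for k in frame_order:                # one feed-forward input per frame
        inp = np.exp(-((x - k) ** 2) / 2.0)
        f = np.maximum(u, 0.0)           # threshold-linear activation
        u += dt / tau * (-u + kernel @ f + inp)
        total += np.maximum(u, 0.0).sum()
    return total

forward = run_field(range(n))            # trained temporal order
reverse = run_field(range(n - 1, -1, -1))
print(forward > reverse)                 # sequence-selective response
```

When the frames arrive in the trained order, the lateral pre-activation coincides with the next input and amplifies the response; in reverse order it does not, which is the essence of the sequence selectivity.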
We exploit a gain-field mechanism to implement the central coordinate transformation of the shape representations into an object-centered reference frame [3]. Typical effector-object interactions correspond to activity regions in such a relative reference frame and are learned from training examples. Similarly, simple motion-energy detectors are applied in the object-centered reference frame and encode relative motion. The properties of transitive action neurons are modeled as a multiplicative combination of relative shape and motion detectors.
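The gain-field coordinate transformation can be sketched as follows. The population code, map size, and readout below are simplified assumptions in the spirit of [3], not the model's implementation:

```python
import numpy as np

# Hedged sketch of a gain-field coordinate transform. A retinotopic map
# of the effector is multiplicatively modulated by the object-position
# map; summing the gain-modulated basis responses along constant
# (effector - object) diagonals yields a map indexed by the relative
# position, i.e. an object-centered frame.

n = 32
pos = np.arange(n)

def pop_code(center, width=1.5):
    """Gaussian population code for a 1D position."""
    return np.exp(-((pos - center) ** 2) / (2 * width ** 2))

def object_centered(eff_pos, obj_pos):
    # basis layer: outer product = effector map gain-modulated by the
    # object map (multiplicative gain field)
    basis = np.outer(pop_code(eff_pos), pop_code(obj_pos))
    # read out along diagonals of constant relative position d = eff - obj
    rel = np.array([np.trace(basis, offset=-d) for d in range(-n + 1, n)])
    return np.argmax(rel) - (n - 1)      # relative position estimate

print(object_centered(20, 12))           # → 8  (effector right of object)
print(object_centered(10, 12))           # → -2 (effector left of object)
```

The readout is the same whatever the absolute retinal positions, which is what makes detectors defined in this frame invariant to stimulus position.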

RESULTS: The model performance was tested on a set of 160 unsegmented sequences of hand grasping or placing actions performed on objects of different sizes, using different grip types and views. Hand actions and objects could be reliably recognized despite their mutual occlusions. Detectors on the highest level showed correct action tuning in more than 95% of the examples and generalized to untrained views. 
Furthermore, the model replicates a number of electrophysiological as well as imaging experiments on action-selective neurons, such as their selectivity for transitive over mimicked actions, their invariance to stimulus position, and their view dependence. In particular, using the same stimulus set, the model closely fits neural data from a recent electrophysiological experiment that confirmed sequence selectivity in mirror neurons in area F5, as previously predicted by the model.

References
[1] Serre, T. et al. (2007). IEEE Trans. Pattern Anal. Mach. Intell. 29, 411-426.
[2] Giese, M.A. and Poggio, T. (2003). Nat. Rev. Neurosci. 4, 179-192.
[3] Deneve, S. and Pouget, A. (2003). Neuron 37, 347-359.

    Mirror neurons in monkey area F5 do not adapt to the observation of repeated actions

    Repetitive presentation of the same visual stimulus entails a response decrease in the action-potential discharge of neurons in various areas of the monkey visual cortex. It is still unclear whether this repetition-suppression effect is also present in single neurons in cortical premotor areas responding to visual stimuli, as suggested by the human functional magnetic resonance imaging (fMRI) literature. Here we report the responses of 'mirror neurons' in monkey area F5 to the repeated presentation of action movies. We find that most single neurons, and the population at large, do not show a significant decrease in firing rate. On the other hand, simultaneously recorded local field potentials do exhibit repetition suppression. As local field potentials are believed to be more closely linked to the blood-oxygen-level-dependent (BOLD) signal exploited by fMRI, these findings suggest caution when drawing conclusions about the spiking activity of neurons in a given area from the observation of BOLD repetition suppression.

    SAR: Generalization of Physiological Agility and Dexterity via Synergistic Action Representation

    Learning effective continuous control policies in high-dimensional systems, including musculoskeletal agents, remains a significant challenge. Over the course of biological evolution, organisms have developed robust mechanisms for overcoming this complexity to learn highly sophisticated strategies for motor control. What accounts for this robust behavioral flexibility? Modular control via muscle synergies, i.e. coordinated muscle co-contractions, is one putative mechanism that enables organisms to learn muscle control in a simplified and generalizable action space. Drawing inspiration from this evolved motor control strategy, we use physiologically accurate human hand and leg models as a testbed for determining the extent to which a Synergistic Action Representation (SAR) acquired from simpler tasks facilitates learning more complex tasks. We find in both cases that SAR-exploiting policies significantly outperform end-to-end reinforcement learning. Policies trained with SAR achieved robust locomotion on a wide set of terrains with high sample efficiency, while baseline approaches failed to learn meaningful behaviors. Additionally, policies trained with SAR on a multi-object manipulation task significantly outperformed (>70% success) baseline approaches (<20% success). Both of these SAR-exploiting policies were also found to generalize zero-shot to out-of-domain environmental conditions, while policies that did not adopt SAR failed to generalize. Finally, we establish the generality of SAR on broader high-dimensional control problems using a robotic manipulation task set and a full-body humanoid locomotion task. To the best of our knowledge, this investigation is the first of its kind to present an end-to-end pipeline for discovering synergies and using this representation to learn high-dimensional continuous control across a wide diversity of tasks.
    Comment: Accepted to RSS 202
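The core idea of a synergistic action space, namely a low-dimensional set of coordinated co-contraction patterns spanning the muscle space, can be sketched with PCA on muscle activations. This is a generic illustration on synthetic data, not the paper's SAR pipeline:

```python
import numpy as np

# Hedged sketch: a synergy-based action space via PCA. The muscle count
# (40) matches the hand model described in the listing; the synergy
# count and the synthetic data are illustrative assumptions.

rng = np.random.default_rng(0)
n_muscles, n_synergies, n_samples = 40, 5, 1000

# synthetic "recorded" activations generated from 5 hidden synergies
true_syn = rng.random((n_synergies, n_muscles))
weights = rng.random((n_samples, n_synergies))
activations = weights @ true_syn          # (samples, muscles)

# PCA: leading principal components of the activation data serve as
# the synergy basis
centered = activations - activations.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
synergies = vt[:n_synergies]              # (synergies, muscles)

def to_muscle_space(coeffs):
    """A policy outputs 5 synergy coefficients instead of 40 activations."""
    return np.clip(coeffs @ synergies + activations.mean(axis=0), 0, 1)

explained = (s[:n_synergies] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by {n_synergies} synergies: {explained:.3f}")
```

A reinforcement-learning policy acting through `to_muscle_space` explores a 5-dimensional space rather than the full 40-dimensional activation space, which is the sample-efficiency argument the abstract makes.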

    MyoDex: A Generalizable Prior for Dexterous Manipulation

    Human dexterity is a hallmark of motor control. Our hands can rapidly synthesize new behaviors despite the complexity of multi-articular musculoskeletal sensory-motor circuits, with 23 joints controlled by more than 40 muscles. In this work, we take inspiration from how human dexterity builds on a diversity of prior experiences, instead of being acquired through a single task. Motivated by this observation, we set out to develop agents that can build upon their previous experience to quickly acquire new (previously unattainable) behaviors. Specifically, our approach leverages multi-task learning to implicitly capture task-agnostic behavioral priors (MyoDex) for human-like dexterity, using a physiologically realistic human hand model, MyoHand. We demonstrate MyoDex's effectiveness in few-shot generalization as well as positive transfer to a large repertoire of unseen dexterous manipulation tasks. Agents leveraging MyoDex can solve approximately 3x more tasks, and 4x faster, in comparison to a distillation baseline. While prior work has synthesized single musculoskeletal control behaviors, MyoDex is the first generalizable manipulation prior that catalyzes the learning of dexterous physiological control across a large variety of contact-rich behaviors. We also demonstrate the effectiveness of our paradigm beyond musculoskeletal control towards the acquisition of dexterity in the 24-DoF Adroit Hand.
    Website: https://sites.google.com/view/myodex
    Comment: Accepted to the 40th International Conference on Machine Learning (2023)

    An Optogenetic Demonstration of Motor Modularity in the Mammalian Spinal Cord

    Motor modules are neural entities hypothesized to be building blocks of movement construction. How motor modules are underpinned by neural circuits has remained obscure. As a first step towards dissecting these circuits, we optogenetically evoked motor outputs from the lumbosacral spinal cord of two strains of transgenic mice: the Chat strain, with channelrhodopsin-2 (ChR2) expressed in motoneurons, and the Thy1 strain, with ChR2 expressed in putatively excitatory neurons. Motor output was represented as a spatial field of isometric ankle force. We found that Thy1 force fields were more complex and diverse in structure than Chat fields: the Thy1 fields comprised mostly non-parallel vectors, while the Chat fields comprised mostly parallel vectors. In both strains, most fields elicited by co-stimulation of two laser beams were well explained by a linear combination of the separately evoked fields. We interpreted the Thy1 force fields as representations of spinal motor modules. Our comparison of the Chat and Thy1 fields allowed us to conclude, with reasonable certainty, that the structure of neuromotor modules originates from excitatory spinal interneurons. Our results not only demonstrate, for the first time using optogenetics, how the spinal modules follow linearity in their combinations, but also provide a reference against which future optogenetic studies of modularity can be compared.
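The linearity test reported above, that co-stimulation fields are well explained by linear combinations of the separately evoked fields, can be sketched as a least-squares fit. The field data here are synthetic stand-ins, not the recorded forces:

```python
import numpy as np

# Hedged sketch of the force-field linearity test. Each field is an
# (n_sites, 2) array of planar ankle force vectors sampled at several
# limb configurations; the site count and noise level are assumptions.

rng = np.random.default_rng(1)
n_sites = 16
field_a = rng.normal(size=(n_sites, 2))          # beam A alone
field_b = rng.normal(size=(n_sites, 2))          # beam B alone
field_ab = (0.7 * field_a + 0.5 * field_b        # co-stimulation, built
            + 0.02 * rng.normal(size=(n_sites, 2)))  # to be near-linear

# least-squares fit of field_ab ≈ a*field_a + b*field_b
X = np.stack([field_a.ravel(), field_b.ravel()], axis=1)
y = field_ab.ravel()
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - ((y - X @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(coef, r2)   # coefficients near (0.7, 0.5), R^2 close to 1
```

A high R^2 across stimulation-site pairs is the quantitative sense in which "most fields ... were well explained by linear combination of the separately evoked fields."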

    MyoSim: Fast and physiologically realistic MuJoCo models for musculoskeletal and exoskeletal studies

    Owing to the restrictions of live experimentation, musculoskeletal simulation models play a key role in biological motor control studies and investigations. Successful results are then tested on live subjects to develop treatments as well as robot-aided rehabilitation procedures for addressing neuromusculoskeletal anomalies ranging from limb loss to tendinitis, and from sarcopenia to brain and spinal injuries. Despite their significance, current musculoskeletal models are computationally expensive and provide limited support for the contact-rich interactions that are essential for studying motor behaviors in activities of daily living, during rehabilitation treatments, or in assistive robotic devices. To bridge this gap, this work proposes an automatic pipeline to generate physiologically accurate musculoskeletal as well as hybrid musculoskeletal-exoskeletal models. Leveraging this pipeline, we present MyoSim, a set of computationally efficient (over two orders of magnitude faster than the state of the art) musculoskeletal models that support fully interactive, contact-rich simulation. We further extend MyoSim to support additional features that help simulate various real-life changes and diseases, such as muscle fatigue and sarcopenia. To demonstrate potential applications, several use cases, including interactive rehabilitation movements, tendon reaffirmation, and co-simulation with an exoskeleton, were developed and investigated for physiological correctness.
    Web-page: https://sites.google.com/view/myosuit

    Neural theory for the perception of causal actions

    The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
    Seventh Framework Programme (European Commission) (Tango Grant FP7-249858-TP3 and AMARSi Grant FP7-ICT-248311)
    Deutsche Forschungsgemeinschaft (Grant GI 305/4-1)
    Hermann and Lilly Schilling Foundation for Medical Research

    Multi-sensor signal processing for catastrophic tool failure detection in turning

    This paper presents a methodology aimed at the identification of a catastrophic tool failure (CTF) in turning processes based on multiple-sensor monitoring. Experimental turning tests were carried out under various cutting conditions (cutting speed, feed, depth of cut) using a multi-sensor monitoring system consisting of a triaxial force sensor, to acquire the three components of the cutting force, and an acoustic emission sensor. Signal analysis, interpretation, and processing were performed on the multi-sensor signals acquired during the turning process, and relevant statistical features were extracted and used to develop a methodology for automatic CTF detection during turning.
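The feature-extraction step can be sketched as follows: statistical features are computed per sensor channel on successive windows, and a window is flagged as a candidate CTF when a feature jumps past a threshold. The channel layout, window size, and threshold are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

# Hedged sketch of multi-sensor statistical feature extraction for
# catastrophic tool failure (CTF) detection in turning.

def window_features(x):
    """Per-window statistical features for one signal channel."""
    mu, sd = x.mean(), x.std()
    return {
        "rms": np.sqrt((x ** 2).mean()),
        "peak": np.abs(x).max(),
        "kurtosis": ((x - mu) ** 4).mean() / sd ** 4 if sd > 0 else 0.0,
    }

def detect_ctf(channels, win=256, rms_ratio=3.0):
    """Flag windows whose RMS exceeds rms_ratio x the baseline RMS on
    any channel (3 cutting-force components + acoustic emission)."""
    n = min(len(c) for c in channels)
    baseline = [window_features(c[:win])["rms"] for c in channels]
    flags = []
    for start in range(0, n - win + 1, win):
        feats = [window_features(c[start:start + win]) for c in channels]
        flags.append(any(f["rms"] > rms_ratio * b
                         for f, b in zip(feats, baseline)))
    return flags

# synthetic demo: the acoustic-emission channel bursts in the last window
rng = np.random.default_rng(2)
force = [rng.normal(0, 1, 1024) for _ in range(3)]
ae = rng.normal(0, 1, 1024)
ae[768:] += rng.normal(0, 10, 256)       # simulated CTF burst
print(detect_ctf(force + [ae]))          # last window flagged
```

In practice such threshold rules are usually replaced by a classifier trained on the extracted features, but the windowed feature vector is the common starting point.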

    View-Based Encoding of Actions in Mirror Neurons of Area F5 in Macaque Premotor Cortex

    Converging experimental evidence indicates that mirror neurons in the monkey premotor area F5 encode the goals of observed motor acts [1–3]. However, it is unknown whether they also contribute to encoding the perspective from which the motor acts of others are seen. In order to address this issue, we recorded the visual responses of mirror neurons of monkey area F5 using a novel experimental paradigm based on the presentation of movies showing grasping motor acts from different visual perspectives. We found that the majority of the tested mirror neurons (74%) exhibited view-dependent activity, with responses tuned to specific points of view. A minority of the tested mirror neurons (26%) exhibited view-independent responses. We conclude that view-independent mirror neurons encode action goals irrespective of the details of the observed motor acts, whereas the view-dependent ones might either form an intermediate step in the formation of view independence or contribute to a modulation of view-dependent representations in higher-level visual areas, potentially linking the goals of observed motor acts with their pictorial aspects.